maxwellHA on zookeeper #1948
base: master
Conversation
the docs need work but I can do some rewrites when this is ready to merge. thanks for the contribution!
Hey, @wanghangyu817! Thank you for your contribution! We're thinking about different options for coordinating multiple instances of Maxwell in our infrastructure (mostly during k8s cluster failovers, etc.), and using Zookeeper feels like a good option for us. I'm planning to try it out over the next couple of days and will report if we see any issues.
lock.lock();
LOGGER.warn("node:" + hostAddress + " lost leader");
LOGGER.warn("master-slave switchover......");
LOGGER.warn("The leadership went from " + hostAddress + " to " + leader.getLeader());
Should this case shut down the current maxwell process given that it has lost the leadership status?
I don't understand what you mean
Sorry, I mean by the time we get this call, it means the current Maxwell process has lost leadership. That means we should probably stop the process altogether or shut down the replicator and go back into the election mode waiting for our turn once again.
Logging a warn does nothing and means we are going to keep replicator threads running and pumping duplicate data into whatever producer is configured. Additionally, the positions store will start getting conflicting writes from two different processes.
@osheroff Do I understand it correctly?
I understand, and I have tested this point. When I kill the process, the code goes straight to this location and prints the results we want, along with the node information of the next leader. If there are other exceptions that make it impossible to reach this code, please tell me and I will fix it.
I'll leave it to Ben to make a call on what should happen in this case, but I feel simply shutting down the process gracefully may be the easiest way to avoid conflicts after a leadership loss. Alternatively, it may be possible to call maxwell.terminate() the way the JGroups-based HA implementation does it.
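If the election layer in this PR is built on Curator's LeaderLatch (an assumption suggested by the leader.getLeader() call above, not confirmed by the diff), the step-down could be wired into the latch listener. A minimal sketch, not the PR's actual code: StepDownListener is a hypothetical name, and it assumes maxwell.terminate() performs a graceful shutdown the way the JGroups-based HA path does.

```java
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
import com.zendesk.maxwell.Maxwell;

// Hypothetical listener, not part of this PR: reacts to leadership changes.
public class StepDownListener implements LeaderLatchListener {
	private final Maxwell maxwell;

	public StepDownListener(Maxwell maxwell) {
		this.maxwell = maxwell;
	}

	@Override
	public void isLeader() {
		// Leadership acquired; the surrounding HA code is expected to start replication.
	}

	@Override
	public void notLeader() {
		// Leadership lost: stop replicating so this instance doesn't produce
		// duplicate events or fight over the positions store with the new leader.
		maxwell.terminate(); // assumed graceful shutdown, as in the JGroups-based HA path
	}
}
```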
In my opinion, when the leader becomes a follower, it only means that the ZooKeeper connection is abnormal or the current node no longer meets the conditions to be the leader. During this period, the program does not care about the Maxwell connector status (of course, Maxwell status monitoring can be added later); it is simply a switch from one node to another. If a ZooKeeper connection problem causes the master/slave switchover, the program will not quit but will become a follower. In a later iteration I will add Maxwell's metrics to the monitoring and use them to decide whether to switch the master/slave, which requires guidance from Ben @osheroff.
During this period, the program does not care about maxwell connector status
If we have lost ZK connection and, consequently, lost the leader status, then the current Maxwell instance will at the very least start producing duplicate events since the other instance that is the leader now is replicating the same set of changes already. Additionally, there is a chance of both instances overwriting each other's position information in maxwell's database, which AFAIU can have negative consequences as well.
If the zookeeper connection problem causes the master/slave switchover, the program will not quit, but become a follower
What do you feel it should mean for a Maxwell instance to become a follower? (AFAIU, there is no notion of a follower mode in the current Maxwell codebase.)
For example, if I start three Maxwell instances, labelled 1, 2 and 3, where 1 is the leader and 2 and 3 are followers: while 1 is the leader, 2 and 3 are just daemon processes that don't do anything. When 1 gives up leadership (not an exit caused by Maxwell itself, but a server failure: for example a restart, memory overflow, running out of disk space, etc.), then 2 or 3 takes over from 1 and continues the collection task. If the Maxwell process exits because of MySQL, then no matter how many instances are started the problem still persists; that is not something high availability can fix. What I need to do is ensure that Maxwell itself is highly available.
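The three-instance flow described above maps naturally onto a ZooKeeper leader election. A minimal sketch, assuming Curator's LeaderLatch is the primitive used; the connection string, latch path, and instance id below are illustrative, not the PR's actual configuration.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class HaElectionSketch {
	public static void main(String[] args) throws Exception {
		// Illustrative connection string and paths; not the PR's configuration.
		CuratorFramework client = CuratorFrameworkFactory.newClient(
				"zk1:2181,zk2:2181,zk3:2181",
				new ExponentialBackoffRetry(1000, 3));
		client.start();

		LeaderLatch latch = new LeaderLatch(client, "/maxwell/leader", "instance-1");
		latch.start();

		// Instances 2 and 3 in the example above sit here as idle daemons;
		// await() returns only once this instance wins the election.
		latch.await();

		// Now we are the leader: start the replication/collection task
		// (e.g. run the Maxwell instance; omitted here).
	}
}
```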
Your solution absolutely solves the scenarios described. One thing I feel it is missing (or I may be confused!) is the scenario where a leader, doing its leader stuff, replicating data, etc., loses its leadership while remaining alive and seemingly healthy (due to ZK connectivity issues, a ZK restart, or any other issue that forces a new election). In those cases the old leader needs to step down, stop doing its usual leader things, and move into a quiet follower mode (stop the binlog replicator, stop writing to the position store, etc.).
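One way to express that "step down and go quiet" behaviour, if the implementation were built on Curator's LeaderSelector (an assumption; the PR may use a different recipe), is autoRequeue(): the instance releases leadership when interrupted and automatically rejoins the election as a follower. A hedged sketch, with runReplicatorUntilInterrupted() as a hypothetical placeholder:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.leader.LeaderSelector;
import org.apache.curator.framework.recipes.leader.LeaderSelectorListenerAdapter;

// Hypothetical, not this PR's code: holds leadership while replicating and
// re-enters the election as a quiet follower after losing it.
public class RequeueingLeader extends LeaderSelectorListenerAdapter {
	@Override
	public void takeLeadership(CuratorFramework client) throws Exception {
		// Runs only while this node is the leader. On ZK connection loss the
		// adapter cancels leadership and this thread gets interrupted.
		runReplicatorUntilInterrupted();
	}

	public static LeaderSelector startElection(CuratorFramework client) {
		// "/maxwell/leader" is an illustrative path, not the PR's actual znode.
		LeaderSelector selector = new LeaderSelector(client, "/maxwell/leader", new RequeueingLeader());
		selector.autoRequeue(); // rejoin the election after stepping down
		selector.start();
		return selector;
	}

	private void runReplicatorUntilInterrupted() throws InterruptedException {
		// Placeholder: a real implementation would run the binlog replicator
		// here and stop writing positions once interrupted.
		Thread.currentThread().join(); // blocks until interrupted
	}
}
```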
I have tested the scenario you described and got the expected results. Please let me know if you have any other problems.
#1946
ZooKeeper is used to implement Maxwell's high availability mode.
Sorry, I modified high_availability.md and added usage instructions. If it does not conform to the documentation conventions, I will roll it back.